AP Establishes Guidelines for AI-Assisted Newsrooms

Today, the Associated Press released guidelines for the use of generative AI in its newsroom. The AP, which has a licensing agreement with OpenAI, laid out a sensible and restrictive set of measures for the emerging technology, warning its staff against using AI to create publishable content. While the guidelines themselves are not contentious, less scrupulous outlets could read the AP's qualified acceptance as a green light to use generative AI in more excessive or deceptive ways.

The organization’s AI guidance emphasizes its belief that AI content should be treated as the fallible tool that it is, not a substitute for trained writers and editors exercising their best judgment. “We don’t see AI as a replacement for reporters in any way,” AP vice president of standards and inclusion Amanda Barrett wrote today in an article about the organization’s approach to AI. “AP journalists are responsible for the accuracy and fairness of the information we share.”

The article directs editors to treat AI-generated content as “unvetted source material,” to which they “must apply editorial judgment and AP sourcing standards when considering information to publish.” It says employees can “experiment with ChatGPT with care” but must not create publishable content with it. The guidance also covers images. “As per our standards, we do not alter any part of the photos, videos or audio,” it states. “That’s why we don’t allow generative AI to add or subtract elements.” However, it carves out an exception for stories in which AI illustrations or art are themselves the subject, and even then the material must be clearly labeled as such.

Barrett warns of the potential for artificial intelligence to spread misinformation. She says that to prevent the accidental publication of AI-generated material, AP journalists “should exercise the same caution and skepticism as usual, including trying to identify the source of the original content, doing a reverse image search to verify the image’s origin, and checking for similar reports from trusted media outlets.” To protect privacy, the guidelines also prohibit writers from entering “confidential or sensitive information into AI tools.”

While these rules are relatively commonsense and uncontroversial, other media outlets have been less careful. CNET was caught earlier this year publishing error-ridden AI-generated financial explainer articles (flagged as computer-generated only if you clicked the article’s sidebar). Gizmodo faced a similar spotlight this summer when it published a Star Wars article filled with inaccuracies. It’s not hard to imagine other outlets, desperate for an edge in a fiercely competitive industry, treating the AP’s (strictly limited) use of AI as a green light to make bot journalism a central figure in their newsrooms, publishing poorly edited or inaccurate content, or failing to flag AI-produced work as such.
